Multilayer Perceptron Training with Inaccurate Derivative Information

Author

  • Jouko Lampinen
Abstract

In this contribution we present an algorithm for using possibly inaccurate knowledge of model derivatives as part of the training data for a multilayer perceptron network (MLP). In many practical process control problems there are well-known rules about the effect of the control variables on the target variables. With the presented algorithm the basically data-driven neural network model can be trained to comply with these a priori rules, making the models more correct and decreasing the amount of required training data. Since the rules are trained by statistical error minimization, they may be numerically inaccurate or contradictory. This makes the collection and maintenance of the rule bases much less expensive than in rule-based expert systems. Currently we are incorporating the derivative-based training into a commercial neural network process control tool.
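The scheme described in the abstract can be sketched as an extra penalty term that pulls the model's input derivatives toward the a priori rule values. The following is a minimal NumPy illustration, not the paper's actual algorithm: the network size, rule points, penalty weight `lam`, and the use of finite differences and numerical gradients are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-3-1 MLP held as a flat parameter vector, so a simple
# numerical gradient over the whole loss is easy to compute.
def unpack(p):
    W1 = p[0:3].reshape(3, 1); b1 = p[3:6]
    W2 = p[6:9].reshape(1, 3); b2 = p[9:10]
    return W1, b1, W2, b2

def mlp(p, x):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def dydx(p, x, eps=1e-4):
    # Finite-difference model derivative dy/dx at each input point.
    return (mlp(p, x + eps) - mlp(p, x - eps)) / (2 * eps)

def loss(p, x, y, x_rule, d_rule, lam=0.1):
    data_term = np.mean((mlp(p, x) - y) ** 2)
    # Squared-error penalty toward the (possibly inaccurate) rule
    # derivatives: minimized statistically, so inexact or mutually
    # contradictory rules are tolerated.
    rule_term = np.mean((dydx(p, x_rule) - d_rule) ** 2)
    return data_term + lam * rule_term

def num_grad(f, p, eps=1e-5):
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

# Sparse, noisy samples of y = x; hypothetical rule: dy/dx is about 1.
x = np.array([[-1.0], [1.0]])
y = x + 0.05 * rng.standard_normal(x.shape)
x_rule = np.linspace(-1, 1, 9).reshape(-1, 1)
d_rule = np.ones((9, 1))

p = 0.1 * rng.standard_normal(10)
loss0 = loss(p, x, y, x_rule, d_rule)   # error before training
for _ in range(2000):
    p -= 0.05 * num_grad(lambda q: loss(q, x, y, x_rule, d_rule), p)
```

Because the rule enters only through an error term, a wrong rule value degrades the fit gracefully instead of invalidating the model, which matches the abstract's point about cheap rule-base maintenance.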


Similar articles


Experiments on Regularizing MLP

In this contribution we present results of using possibly inaccurate knowledge of model derivatives as part of the training data for a multilayer perceptron network (MLP). Even simple constraints offer significant improvements, and the resulting models give better prediction performance than traditional data-driven MLP models.


Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training

We analyze the effects of analog noise on the synaptic arithmetic during multilayer perceptron training, by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations that suggest that fault tolerance, training quality and training trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct cla...
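The noise-injection idea in this abstract can be illustrated with a small sketch: Gaussian noise is added to the weights at every forward and backward pass, while a clean copy of the weights receives the updates. The XOR task, network size, noise level `sigma`, and learning rate below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR training data for a tiny 2-4-1 MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
sigma = 0.02                      # assumed synaptic-noise level

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2, X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(W1, b1, W2, b2, X)
mse0 = np.mean((out0 - Y) ** 2)   # error before training

for _ in range(5000):
    # Noisy weight copies stand in for analogue synaptic imprecision.
    nW1 = W1 + sigma * rng.standard_normal(W1.shape)
    nW2 = W2 + sigma * rng.standard_normal(W2.shape)
    h, out = forward(nW1, b1, nW2, b2, X)
    # Backpropagate through the noisy weights, update the clean ones.
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ nW2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

_, out = forward(W1, b1, W2, b2, X)
mse = np.mean((out - Y) ** 2)
```

The abstract's analysis expands the cost function around such noise-mediated terms; the sketch only reproduces the mechanism, a weight perturbation per training step, that those terms describe.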


Analysis of Optimization Techniques for Feed Forward Neural Networks Based Image Compression

This paper reviews various optimization techniques available for training multi-layer perceptron (MLP) artificial neural networks for compression of images. These optimization techniques can be classified into two categories: derivative-based and derivative-free optimization. The former is based on the calculation of gradients and includes Gradient Descent, Conjugate Gradient, Quasi-Newton, Leve...


Analogue Synaptic Noise - Implications And Learning Improvements

We analyse the effects of analogue noise on the synaptic arithmetic during multilayer perceptron training by expanding the cost function to include noise-mediated penalty terms. Predictions are made in the light of these calculations which suggest that fault tolerance, generalisation ability and learning trajectory should be improved by such noise-injection. Extensive simulation experiments on ...



Journal:

Volume   Issue

Pages  -

Publication date: 1995